Large Language Models Are More Persuasive Than Incentivized Human Persuaders

Persuasion
Artificial Intelligence
Machine Learning
Authors
Affiliations
Philipp Schoenegger

London School of Economics and Political Science

Francesco Salvi

EPFL

Jiacheng Liu

Purdue University

Xiaoli Nan

University of Maryland

Ramit Debnath

University of Cambridge

Barbara Fasolo

London School of Economics and Political Science

Evelina Leivada

Autonomous University of Barcelona

Gabriel Recchia

Modulo Research

Fritz Günther

Humboldt-Universität zu Berlin

Ali Zarifhonarvar

Indiana University

Joe Kwon

MIT

Zahoor Ul Islam

Umeå University

Marco Dehnert

University of Arkansas

Daryl Y. H. Lee

University College London

Madeline G. Reinecke

University of Oxford

David G. Kamper

University of California, Los Angeles

New York University

Adam Sandford

University of Guelph-Humber

Jonas Kgomo

Equiano Institute

Luke Hewitt

Stanford University

Shreya Kapoor

Friedrich-Alexander-Universität Erlangen-Nürnberg

Kerem Oktar

Princeton University

Eyup Engin Kucuk

MIT

Bo Feng

Georgia Institute of Technology

Cameron R. Jones

UC San Diego

Izzy Gainsburg

Stanford University

Sebastian Olschewski

University of Basel

Nora Heinzelmann

Heidelberg University

Francisco Cruz

Universidade de Lisboa

Ben M. Tappin

London School of Economics and Political Science

Tao Ma

London School of Economics and Political Science

Peter S. Park

MIT

Rayan Onyonka

University of Leeds

Arthur Hjorth

Aarhus University

Peter Slattery

MIT

Qingcheng Zeng

Northwestern University

Lennart Finke

ETH Zurich

Igor Grossmann

University of Waterloo

Alessandro Salatiello

University of Tübingen

Ezra Karger

Federal Reserve Bank of Chicago

Published

May 21, 2025


We directly compare the persuasion capabilities of a frontier large language model (LLM; Claude Sonnet 3.5) against incentivized human persuaders in an interactive, real-time conversational quiz setting. In this preregistered, large-scale incentivized experiment, participants (quiz takers) completed an online quiz in which persuaders (either humans or LLMs) attempted to persuade them toward correct or incorrect answers. We find that LLM persuaders achieved significantly higher compliance with their directional persuasion attempts than incentivized human persuaders, demonstrating superior persuasive capabilities in both truthful (toward correct answers) and deceptive (toward incorrect answers) contexts. We also find that LLM persuaders significantly increased quiz takers’ accuracy, leading to higher earnings, when steering them toward correct answers, and significantly decreased their accuracy, leading to lower earnings, when steering them toward incorrect answers. Overall, our findings suggest that AI’s persuasion capabilities already exceed those of humans who have real-money bonuses tied to performance. Our findings of increasingly capable AI persuaders thus underscore the urgency of emerging alignment and governance frameworks.